

Section: Partnerships and Cooperations

European Initiatives

FP7 & H2020 Projects

RESIBOTS
  • Title: Robots with animal-like resilience

  • Program: H2020

  • Type: ERC

  • Duration: May 2015 - April 2020

  • Coordinator: Inria

  • Inria contact: Jean-Baptiste Mouret

  • Despite over 50 years of research in robotics, most existing robots are far from being as resilient as the simplest animals: they are fragile machines that easily stop functioning in difficult conditions. The goal of this proposal is to radically change this situation by providing the algorithmic foundations for low-cost robots that can autonomously recover from unforeseen damage within a few minutes. We contend that trial-and-error learning algorithms provide an alternative approach that requires neither diagnosis nor pre-defined contingency plans. In this project, we will develop and study a novel family of such learning algorithms that make it possible for autonomous robots to quickly discover compensatory behaviors.
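    The trial-and-error idea can be sketched as a simple search loop over a pre-computed repertoire of behaviors. The sketch below is only an illustration under assumed names and toy numbers, not the project's actual algorithm: the robot starts from prior performance estimates (e.g., obtained in simulation before damage), then tests the most promising behaviors on the real, damaged robot until one performs well enough.

```python
# Minimal sketch of trial-and-error damage recovery (an illustration,
# NOT the RESIBOTS code): keep a repertoire of behaviors with prior
# performance estimates, test behaviors on the damaged robot, and stop
# as soon as one works well enough.

def adapt(repertoire, evaluate, threshold, max_trials=20):
    """Return (best_behavior, trials_used) after trial-and-error search."""
    est = dict(repertoire)              # prior performance estimates
    tried = set()
    best, best_perf = None, float("-inf")
    for _ in range(max_trials):
        untried = [b for b in est if b not in tried]
        if not untried:
            break
        b = max(untried, key=est.get)   # most promising untried behavior
        perf = evaluate(b)              # one trial on the damaged robot
        tried.add(b)
        est[b] = perf                   # replace prior with observation
        if perf > best_perf:
            best, best_perf = b, perf
        if best_perf >= threshold:      # good enough: stop searching
            break
    return best, len(tried)

# Hypothetical hexapod gaits: "tripod_a" relies on a now-broken leg,
# so its real performance is far below its prior estimate.
priors = {"tripod_a": 0.9, "tripod_b": 0.8, "wave": 0.6}
damaged = {"tripod_a": 0.1, "tripod_b": 0.75, "wave": 0.55}

best, trials = adapt(priors, damaged.get, threshold=0.7)
# best == "tripod_b", found in 2 trials instead of testing every gait
```

    In the published work associated with this line of research, the repertoire is built by evolutionary search in simulation and the on-robot search uses Bayesian optimization rather than the greedy loop above; the sketch only conveys why no damage diagnosis or contingency plan is needed.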

ANDY
  • Title: Advancing Anticipatory Behaviors in Dyadic Human-Robot Collaboration

  • Program: H2020

  • Type: ICT RIA (No. 731540)

  • Duration: January 2017 - December 2020

  • Coordinator: IIT

  • PI for Inria: Serena Ivaldi

  • Recent technological progress permits robots to actively and safely share a common workspace with humans. Europe currently leads the robotic market for safety-certified robots, by enabling robots to react to unintentional contacts. AnDy leverages these technologies and strengthens European leadership by endowing robots with the ability to control physical collaboration through intentional interaction.

    To achieve this interaction, AnDy relies on three technological and scientific breakthroughs. First, AnDy will innovate the way of measuring human whole-body motions by developing the wearable AnDySuit, which tracks motions and records forces. Second, AnDy will develop the AnDyModel, which combines ergonomic models with cognitive predictive models of human dynamic behavior in collaborative tasks, which are learned from data acquired with the AnDySuit. Third, AnDy will propose the AnDyControl, an innovative technology for assisting humans through predictive physical control, based on AnDyModel.

    By measuring and modeling human whole-body dynamics, AnDy provides robots with an entirely new level of awareness of human intentions and ergonomics. By incorporating this awareness online in the robot's controllers, AnDy paves the way for novel applications of physical human-robot collaboration in manufacturing, health-care, and assisted living.

    AnDy will accelerate take-up and deployment in these domains by validating its progress in several realistic scenarios. In the first validation scenario, the robot is an industrial collaborative robot, which tailors its controllers to individual workers to improve ergonomics. In the second validation scenario, the robot is an assistive exoskeleton, which optimizes human comfort by reducing physical stress. In the third validation scenario, the robot is a humanoid, which offers assistance to a human while maintaining the balance of both.

  • Partners: Italian Institute of Technology (IIT, Italy, coordinator), Jožef Stefan Institute (JSI, Slovenia), DLR (Germany), IMK Automotive GmbH (Germany), XSens (Netherlands), AnyBody Technologies (Denmark)

Collaborations in European Programs, Except FP7 & H2020

HEAP
  • Program: CHIST-ERA

  • Project acronym: HEAP

  • Project title: HEAP: Human-Guided Learning and Benchmarking of Robotic Heap Sorting

  • Duration: March 2019 - February 2022

  • Coordinator: Gerhard Neumann (Univ. of Lincoln, UK)

  • PI for Inria: Serena Ivaldi

  • Other partners: Italian Institute of Technology (Italy), Technische Universität Wien (Austria), Idiap Research Institute (Switzerland), Inria

  • This project will provide scientific advancements in benchmarking, object recognition, manipulation, and human-robot interaction. We focus on sorting a complex, unstructured heap of unknown objects (resembling nuclear waste consisting of a set of broken, deformed bodies) as an instance of an extremely complex manipulation task. The consortium aims to build an end-to-end benchmarking framework, which includes rigorous scientific methodology and experimental tools for application in realistic scenarios.

    Benchmark scenarios will be developed with off-the-shelf manipulators and grippers, making it possible to create an affordable setup that can be easily reproduced both physically and in simulation. We will develop benchmark scenarios of varying complexity, i.e., grasping and pushing irregular objects, grasping selected objects from the heap, identifying all object instances, and sorting the objects by placing them into corresponding bins. We will provide scanned CAD models of the objects that can be used for 3D printing in order to recreate our benchmark scenarios. Benchmarks with existing grasp planners and manipulation algorithms will be implemented as baseline controllers that are easily exchangeable using ROS.

    The ability of robots to fully autonomously handle dense clutter or a heap of unknown objects has been very limited due to challenges in scene understanding, grasping, and decision making. Instead, we will rely on semi-autonomous approaches in which a human operator can interact with the system (e.g., via tele-operation, but not only) and give high-level commands to complement the autonomous skill execution. The degree of autonomy of our system will be adapted to the complexity of the situation. We will also benchmark our semi-autonomous task execution with different human operators and quantify the gap to the current state of the art in autonomous manipulation.

    Building on our semi-autonomous control framework, we will develop a manipulation skill learning system that learns from demonstrations and corrections from the human operator and can therefore learn complex manipulations in a data-efficient manner. To improve object recognition and segmentation in cluttered heaps, we will develop new perception algorithms and investigate interactive perception in order to improve the robot's understanding of the scene in terms of object instances, categories, and properties.